Transitioning Beyond General Prompting
AI011 Lesson 7

Optimization through Fine-Tuning and Specialized Architectures

1. Beyond the Prompt

While few-shot prompting is a powerful starting point, scaling AI solutions often requires Supervised Fine-Tuning (SFT), which bakes specific knowledge or behaviors directly into the model's weights.

The Decision: You should only fine-tune when the improvements in response quality and the reduction in token costs outweigh the significant compute and data preparation effort required.

$Cost = Tokens \times Rate$
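The cost trade-off above can be made concrete with a quick calculation. This sketch compares a few-shot prompt (which repeats its instructions and examples on every call) against a fine-tuned model with a short prompt; all token counts, call volumes, and rates are hypothetical numbers chosen for illustration.

```python
# Sketch: per-call token cost of few-shot prompting vs. a fine-tuned model.
# All token counts and rates below are hypothetical, for illustration only.

def monthly_cost(prompt_tokens, completion_tokens, calls, rate_per_1k):
    """Cost = Tokens x Rate, summed over all calls."""
    tokens_per_call = prompt_tokens + completion_tokens
    return tokens_per_call * calls * rate_per_1k / 1000

# Few-shot: every request repeats ~1,500 tokens of instructions + examples.
few_shot = monthly_cost(prompt_tokens=1500, completion_tokens=200,
                        calls=100_000, rate_per_1k=0.002)

# Fine-tuned: the behavior is baked into the weights, so the prompt shrinks.
fine_tuned = monthly_cost(prompt_tokens=100, completion_tokens=200,
                          calls=100_000, rate_per_1k=0.002)

print(f"few-shot:   ${few_shot:,.2f}")    # -> $340.00
print(f"fine-tuned: ${fine_tuned:,.2f}")  # -> $60.00
```

Fine-tuning wins here only because the call volume is high enough to amortize the one-time training effort; at low volume, the prompt-token savings would not cover it.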

2. The SLM Revolution

Small Language Models (SLMs) are highly efficient, scaled-down variants of their massive counterparts (e.g., Phi-3.5, Mistral Small). They are trained on highly curated, high-quality data.

Trade-offs: SLMs offer significantly lower latency and enable edge deployment (running locally on devices), but they sacrifice the broad, generalized "human-like" intelligence found in massive LLMs.

3. Specialized Architectures

  • Mixture of Experts (MoE): A technique that scales the total model size while maintaining computational efficiency during inference. Only a subset of "experts" are activated for any given token (e.g., Phi-3.5-MoE).
  • Multimodality: Architectures designed to process text, images, and sometimes audio simultaneously, expanding the use cases beyond text generation (e.g., Llama 3.2).
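To make the MoE idea concrete, here is a minimal top-k routing sketch in plain Python: a gating score is computed per expert, but only the top two experts are activated for a token, so per-token compute stays roughly flat as the total expert count grows. The scores and expert count are toy values, not taken from any real model.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(gate_scores, top_k=2):
    """Pick the top_k experts for one token and renormalize their weights."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)[:top_k]
    weights = softmax([gate_scores[i] for i in ranked])
    return list(zip(ranked, weights))  # [(expert_index, weight), ...]

# One token's gate scores over 8 experts: only experts 3 and 5 activate.
scores = [0.1, -1.2, 0.3, 2.5, 0.0, 1.9, -0.4, 0.2]
for expert, weight in route(scores):
    print(f"expert {expert}: weight {weight:.2f}")
```

The model's total parameter count is the sum over all experts, but each forward pass only touches `top_k` of them, which is the efficiency property the lesson describes.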
The Efficiency Hierarchy
Always attempt Prompt Engineering first. If that fails, implement RAG (Retrieval-Augmented Generation). Use Fine-Tuning only as the final advanced optimization step.
model_selection.py
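The contents of the lesson's `model_selection.py` are not shown on this page; the following is a hypothetical sketch of the efficiency hierarchy it likely encodes: attempt prompt engineering first, fall back to RAG when the model lacks knowledge, and fine-tune only when the cost-benefit case holds.

```python
# Hypothetical sketch of model_selection.py (original file not shown):
# encodes the efficiency hierarchy from the lesson.

def choose_strategy(prompting_meets_quality: bool,
                    needs_private_knowledge: bool,
                    cost_benefit_positive: bool) -> str:
    # 1. Always attempt prompt engineering first.
    if prompting_meets_quality:
        return "prompt_engineering"
    # 2. If the gap is missing knowledge, ground the model with RAG.
    if needs_private_knowledge:
        return "rag"
    # 3. Fine-tune only when quality/cost gains outweigh compute effort.
    if cost_benefit_positive:
        return "fine_tuning"
    return "revisit_requirements"

print(choose_strategy(False, True, False))  # -> rag
```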
Question 1
When does the course recommend proceeding with fine-tuning over prompt engineering?
  • When the benefits in quality and cost (reduced token usage) outweigh the compute effort.
  • Whenever you need the model to sound more human-like.
  • As the very first step, before trying RAG or prompt engineering.
  • Only when deploying to an edge device.
Question 2
Which model architecture allows scaling model size while maintaining computational efficiency?
  • Supervised Fine-Tuning (SFT)
  • Retrieval-Augmented Generation (RAG)
  • Mixture of Experts (MoE)
  • Multimodality
Challenge: Edge Deployment Strategy
Apply your knowledge to a real-world scenario.
You need to deploy a multilingual translation tool that runs locally on a laptop with limited GPU resources.
Task 1
Select the appropriate model family and tokenizer for this multilingual, low-resource task.
Solution:
Mistral NeMo with the Tekken Tokenizer. It is optimized for multilingual text and fits within SLM constraints.
Task 2
Define the deployment framework for high-performance local inference.
Solution:
Use ONNX Runtime or Ollama for local execution to maximize hardware acceleration on the laptop.
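As a rough sketch of what Task 2 looks like in practice, the snippet below calls a local Ollama server over its HTTP API (default port 11434). The model name "mistral-nemo" and the prompt are illustrative assumptions, and the request is attempted only against localhost, so it degrades gracefully when no server is running.

```python
import json
import urllib.request

# Sketch: local inference through Ollama's HTTP API (default port 11434).
# Model name and prompt are illustrative assumptions.

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request("mistral-nemo", "Translate to French: Good morning")

try:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    # No local Ollama server running -- fine for a dry run.
    print("Ollama not reachable; payload:", payload)
```

Ollama handles quantization and hardware acceleration behind this API, which is what makes it viable on a laptop with limited GPU resources.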